Research connecting text and images has recently seen several breakthroughs, with models like CLIP, DALL-E 2, and Stable Diffusion. However, the connection between text and other visual modalities, such as lidar data, has received far less attention, hindered by the lack of text-lidar datasets. In this work, we propose LidarCLIP, a mapping from automotive point clouds to a pre-existing CLIP embedding space. Using image-lidar pairs, we supervise a point cloud encoder with the image CLIP embeddings, effectively relating text and lidar data with the image domain as an intermediary. We show the effectiveness of LidarCLIP by demonstrating that lidar-based retrieval is generally on par with image-based retrieval, but with complementary strengths and weaknesses. By combining image and lidar features, we improve upon both single-modality methods and enable a targeted search for challenging detection scenarios under adverse sensor conditions. We also use LidarCLIP as a tool to investigate fundamental lidar capabilities through natural language. Finally, we leverage our compatibility with CLIP to explore a range of applications, such as point cloud captioning and lidar-to-image generation, without any additional training. We hope LidarCLIP can inspire future work to dive deeper into connections between text and point cloud understanding. Code and trained models are available at https://github.com/atonderski/lidarclip.
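The supervision scheme lends itself to a compact sketch: freeze a CLIP image encoder, embed the camera image of each image-lidar pair, and train a point cloud encoder to regress that embedding. Below is a minimal PyTorch sketch of this idea; the toy `PointCloudEncoder`, the `paired_loader`, the CLIP variant, and the cosine loss are all illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git


class PointCloudEncoder(torch.nn.Module):
    """Toy PointNet-style encoder standing in for the paper's lidar backbone."""

    def __init__(self, embed_dim=512):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, embed_dim))

    def forward(self, pts):                      # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values   # global max-pool over points


device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
clip_model.eval()                                # the image encoder stays frozen

lidar_encoder = PointCloudEncoder(embed_dim=512).to(device)
optimizer = torch.optim.AdamW(lidar_encoder.parameters(), lr=1e-4)

for images, point_clouds in paired_loader:       # hypothetical image-lidar pairs
    with torch.no_grad():
        target = clip_model.encode_image(images.to(device)).float()
    pred = lidar_encoder(point_clouds.to(device))
    # Pull the lidar embedding toward the frozen CLIP image embedding; a
    # cosine loss keeps it compatible with CLIP's normalized text embeddings.
    loss = 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```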
Motion prediction systems aim to capture the future behavior of traffic scenarios, enabling autonomous vehicles to perform safe and efficient planning. The evolution of these scenarios is highly uncertain and depends on the interactions of agents with static and dynamic objects in the scene. GNN-based approaches have recently gained attention as they are well suited to naturally model these interactions. However, one of the main challenges that remains unexplored is how to address the complexity and opacity of these models in order to deal with the transparency requirements for autonomous driving systems, which include aspects such as interpretability and explainability. In this work, we aim to improve the explainability of motion prediction systems through three complementary approaches. First, we propose a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. This learned attention provides information about the most important agents and interactions in the scene. Second, we explore this same idea with the explanations provided by GNNExplainer. Third, we apply counterfactual reasoning to provide explanations of selected individual scenarios by exploring the sensitivity of the trained model to changes made to the input data, i.e., masking some elements of the scene, modifying trajectories, and adding or removing dynamic agents. The explainability analysis provided in this paper is a first step towards more transparent and reliable motion prediction systems, which is important from the perspective of users, developers, and regulatory agencies. The code to reproduce this work is publicly available at https://github.com/sancarlim/Explainable-MP/tree/v1.1.
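The counterfactual probe described above, masking scene elements and measuring the model's sensitivity, can be sketched generically. In the sketch below, `model` and the `scene.without_agent` editor are hypothetical stand-ins, not the repository's API.

```python
import torch


def counterfactual_sensitivity(model, scene, agent_ids):
    """Counterfactual probe in the spirit of the paper: remove one agent at a
    time and measure how far the predicted trajectory of the target agent
    moves. `model` and the `scene.without_agent` editor are hypothetical
    stand-ins, not the repository's API."""
    scores = {}
    with torch.no_grad():
        baseline = model(scene)                  # (T, 2) predicted trajectory
        for aid in agent_ids:
            altered = model(scene.without_agent(aid))
            # Mean displacement between factual and counterfactual outputs;
            # a larger score means the removed agent mattered more.
            scores[aid] = (baseline - altered).norm(dim=-1).mean().item()
    return scores
```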
Masked autoencoding has become a successful pre-training paradigm for Transformer models on text, images, and, recently, point clouds. Raw automotive datasets are suitable candidates for self-supervised pre-training, as they are generally cheap to collect compared with annotations for tasks such as 3D object detection (OD). However, the development of masked autoencoders for point clouds has focused solely on synthetic and indoor data. Consequently, existing methods have tailored their representations and models to point clouds that are small, dense, and have homogeneous point density. In this work, we study masked autoencoding in the automotive setting, where point clouds are sparse and the point density can vary drastically between objects in the same scene. To this end, we propose Voxel-MAE, a simple masked autoencoding pre-training scheme designed for voxel representations. We pre-train the backbone of a Transformer-based 3D object detector to reconstruct masked voxels and to distinguish between empty and non-empty voxels. Our method improves 3D OD performance by 1.75 mAP points and 1.05 NDS on the nuScenes dataset. Compared with existing self-supervised methods for automotive data, Voxel-MAE shows a $2\times$ performance improvement. Furthermore, we show that by pre-training with Voxel-MAE, we need only 40% of the annotated data to outperform a randomly initialized equivalent. Code will be released.
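The two-part objective, reconstructing masked voxels while classifying voxels as empty or non-empty, can be sketched as follows. The encoder/decoder interfaces and the mask ratio are assumptions for illustration, not the released code.

```python
import torch
import torch.nn.functional as F


def voxel_mae_step(encoder, decoder, voxel_feats, occupancy, mask_ratio=0.7):
    """One training step of a Voxel-MAE-style objective (a sketch under
    assumed interfaces, not the released code). `encoder` embeds the visible
    voxels; `decoder` is a hypothetical two-headed module that returns
    reconstructions of the masked voxels and occupancy logits for all
    candidate voxels.
    voxel_feats: (V, C) features of the non-empty voxels.
    occupancy:   (V_all,) 0/1 labels marking candidate voxels as non-empty."""
    keep = torch.rand(voxel_feats.shape[0]) > mask_ratio  # keep ~30% visible
    latent = encoder(voxel_feats[keep])
    masked_idx = (~keep).nonzero(as_tuple=True)[0]
    recon, occ_logits = decoder(latent, masked_idx)
    loss_recon = F.mse_loss(recon, voxel_feats[~keep])    # rebuild masked voxels
    loss_occ = F.binary_cross_entropy_with_logits(occ_logits, occupancy.float())
    return loss_recon + loss_occ
```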
Accurate uncertainty estimates are essential for deploying deep object detectors in safety-critical systems. The development and evaluation of probabilistic object detectors have been hindered by shortcomings in existing performance measures, which tend to involve arbitrary thresholds or limit the detector's choice of distributions. In this work, we propose to view object detection as a set prediction task, where the detector predicts a distribution over the set of objects. Using the negative log-likelihood for random finite sets, we present a proper scoring rule for evaluating and training probabilistic object detectors. The proposed method can be applied to existing probabilistic detectors, is free of thresholds, and enables fair comparison between architectures. Three different types of detectors are evaluated on the COCO dataset. Our results indicate that the training of existing detectors is optimized toward non-probabilistic metrics. We hope to encourage the development of new object detectors that can accurately estimate their own uncertainty. Code is available at https://github.com/georghess/pmb-nll.
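To make the set-level likelihood concrete, the sketch below computes a simplified negative log-likelihood for a multi-Bernoulli-style prediction: each detection has an existence probability and a Gaussian over its parameters, ground truth is matched to detections at minimum cost, and unmatched detections pay a non-existence penalty. This is an illustration of the general idea, not the paper's full Poisson multi-Bernoulli NLL.

```python
import torch
from scipy.optimize import linear_sum_assignment


def set_nll(pred_means, pred_vars, pred_probs, gt):
    """Simplified set-level negative log-likelihood (an illustration of the
    idea, not the paper's full PMB-NLL). Prediction i exists with probability
    pred_probs[i] and, if it exists, generates a Gaussian with mean
    pred_means[i] and variance pred_vars[i]. Assumes at least as many
    predictions as ground-truth objects.
    pred_means, pred_vars: (N, D); pred_probs: (N,); gt: (M, D)."""
    dist = torch.distributions.Normal(pred_means[:, None, :],
                                      pred_vars[:, None, :].sqrt())
    # cost[i, j]: NLL of prediction i existing and generating ground truth j.
    cost = -dist.log_prob(gt[None, :, :]).sum(-1) - torch.log(pred_probs)[:, None]
    row, col = linear_sum_assignment(cost.detach().numpy())  # min-cost matching
    row_t, col_t = torch.as_tensor(row), torch.as_tensor(col)
    nll = cost[row_t, col_t].sum()
    unmatched = torch.ones(len(pred_probs), dtype=torch.bool)
    unmatched[row_t] = False
    # Unmatched predictions should predict non-existence: add -log(1 - p).
    return nll - torch.log1p(-pred_probs[unmatched]).sum()
```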
We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images. We compare the performance of the group-equivariant networks known as S2CNNs with that of standard non-equivariant CNNs trained with increasing amounts of data augmentation. The chosen architectures can be regarded as baseline references for their respective design paradigms. Our models are trained and evaluated on the MNIST or FashionMNIST datasets projected onto the sphere. For the inherently rotation-invariant task of image classification, we find that, by considerably increasing the amount of data augmentation and the size of the network, standard CNNs can reach at least the same performance as the equivariant network. In contrast, for the inherently equivariant task of semantic segmentation, the non-equivariant networks are consistently outperformed by equivariant networks with fewer parameters. We also analyze and compare the inference latency and training time of the different networks, enabling detailed trade-off considerations between equivariant architectures and data augmentation for practical problems. The equivariant spherical networks used in the experiments are available at https://github.com/janegerken/sem_seg_s2cnn.
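For reference, the data augmentation baseline amounts to resampling the spherical signal under random SO(3) rotations. A generic sketch follows; the grid layout and nearest-neighbour resampling are illustrative, not the paper's pipeline.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def rotate_spherical_image(values, grid_dirs, rot=None):
    """Rotation augmentation for a spherical image (a generic sketch, not the
    paper's code): rotate the sampling directions and resample the signal by
    nearest neighbour on the sphere.
    values:    (P,) signal sampled at the grid directions
    grid_dirs: (P, 3) unit vectors of the spherical sampling grid."""
    if rot is None:
        rot = Rotation.random()                 # uniform random SO(3) element
    rotated = grid_dirs @ rot.as_matrix().T     # where each grid point moves
    # For every rotated direction, copy the value of the nearest original
    # grid direction (max cosine similarity = nearest on the unit sphere).
    idx = np.argmax(rotated @ grid_dirs.T, axis=1)
    return values[idx]
```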
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human operator to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Masked Image Modelling (MIM) has been shown to be an efficient self-supervised learning (SSL) pre-training paradigm when paired with transformer architectures and large amounts of unlabelled natural images. In the medical imaging domain, the difficulty of accessing and obtaining large amounts of labelled data, combined with the availability of unlabelled data, makes MIM an interesting approach to advance deep learning (DL) applications based on 3D medical imaging data. Nevertheless, SSL and, in particular, MIM applications with medical imaging data are rather scarce, and there is still uncertainty around the potential of such a learning paradigm in the medical domain. We study MIM in the context of Prostate Cancer (PCa) lesion classification with T2-weighted (T2w) axial magnetic resonance imaging (MRI) data. In particular, we explore the effect of using MIM when coupled with convolutional neural networks (CNNs) under different conditions, such as different masking strategies, obtaining better results in terms of AUC than other pre-training strategies like ImageNet weight initialization.
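The core MIM ingredient, random patch masking, carries over directly to 3D volumes. A generic sketch follows, where the patch size, mask ratio, and zero fill value are illustrative rather than the paper's exact strategy.

```python
import torch


def mask_volume(volume, patch=16, mask_ratio=0.6):
    """Random patch masking for masked image modelling on a 3D volume (a
    generic sketch; patch size, ratio, and fill value are illustrative).
    volume: (C, D, H, W) tensor with D, H, W divisible by `patch`."""
    C, D, H, W = volume.shape
    gd, gh, gw = D // patch, H // patch, W // patch
    mask = torch.rand(gd, gh, gw) < mask_ratio           # True = masked patch
    # Upsample the patch-level mask to voxel resolution and zero out patches.
    voxel_mask = mask.repeat_interleave(patch, 0) \
                     .repeat_interleave(patch, 1) \
                     .repeat_interleave(patch, 2)
    masked = volume.clone()
    masked[:, voxel_mask] = 0.0
    return masked, mask  # the network is trained to reconstruct masked patches
```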
Hyperspectral Imaging (HSI) provides detailed spectral information and has been utilised in many real-world applications. This work introduces an HSI dataset of building facades in a light industry environment with the aim of classifying different building materials in a scene. The dataset is called the Light Industrial Building HSI (LIB-HSI) dataset. This dataset consists of nine categories and 44 classes. In this study, we investigated deep learning based semantic segmentation algorithms on RGB and hyperspectral images to classify various building materials, such as timber, brick and concrete.
Point cloud analysis is receiving increasing attention; however, most existing point cloud models lack the practical ability to deal with the unavoidable presence of unknown objects. This paper mainly discusses point cloud analysis under open-set settings, where we train the model without data from unknown classes and identify them in the inference stage. We propose to solve open-set point cloud analysis using a novel Point Cut-and-Mix mechanism consisting of Unknown-Point Simulator and Unknown-Point Estimator modules. Specifically, we use the Unknown-Point Simulator to simulate unknown data in the training stage by manipulating the geometric context of partial known data. Based on this, the Unknown-Point Estimator module learns to exploit the point cloud's feature context for discriminating the known and unknown data. Extensive experiments show the plausibility of open-set point cloud analysis and the effectiveness of our proposed solutions. Our code is available at https://github.com/ShiQiu0419/pointcam.
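The Unknown-Point Simulator idea, carving out part of a known object and re-attaching it in an implausible configuration, can be sketched as follows; the cut fraction, yaw-only rotation, and shift scale are illustrative assumptions, not the released implementation.

```python
import numpy as np


def simulate_unknown(points, cut_ratio=0.25, rng=None):
    """A cut-and-mix style unknown simulator in the spirit of the paper (the
    released code may differ): cut the points nearest to a random anchor and
    re-attach them rotated and shifted, so the result no longer resembles
    any known class. points: (N, 3) array."""
    rng = np.random.default_rng() if rng is None else rng
    anchor = points[rng.integers(len(points))]
    d = np.linalg.norm(points - anchor, axis=1)
    cut = d < np.quantile(d, cut_ratio)            # the partial region to move
    moved = points[cut] - anchor
    theta = rng.uniform(0, 2 * np.pi)              # random yaw rotation
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]])
    moved = moved @ R.T + anchor + rng.normal(scale=0.3, size=3)
    out = points.copy()
    out[cut] = moved
    return out, cut  # `cut` marks the simulated unknown points
```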
We introduce a new benchmark dataset, Placenta, for node classification in an underexplored domain: predicting microanatomical tissue structures from cell graphs in placenta histology whole slide images. This problem is uniquely challenging for graph learning for a few reasons. Cell graphs are large (>1 million nodes per image), node features are varied (64-dimensions of 11 types of cells), class labels are imbalanced (9 classes ranging from 0.21% of the data to 40.0%), and cellular communities cluster into heterogeneously distributed tissues of widely varying sizes (from 11 nodes to 44,671 nodes for a single structure). Here, we release a dataset consisting of two cell graphs from two placenta histology images totalling 2,395,747 nodes, 799,745 of which have ground truth labels. We present inductive benchmark results for 7 scalable models and show how the unique qualities of cell graphs can help drive the development of novel graph neural network architectures.
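As an example of a scalable inductive baseline on a graph of this size, here is a neighbor-sampling GraphSAGE sketch in PyTorch Geometric; the hyperparameters and the assumed `data` object with a boolean `train_mask` are illustrative, not the benchmark's actual configuration.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GraphSAGE
from torch_geometric.loader import NeighborLoader

# `data` is assumed to be a torch_geometric Data object with 64-dim node
# features, 9 classes, and a boolean train_mask over labeled nodes.
model = GraphSAGE(in_channels=64, hidden_channels=128,
                  num_layers=2, out_channels=9)
loader = NeighborLoader(data, num_neighbors=[10, 10], batch_size=1024,
                        input_nodes=data.train_mask)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for batch in loader:
    opt.zero_grad()
    out = model(batch.x, batch.edge_index)
    # Only the seed nodes of each mini-batch carry the supervision signal.
    loss = F.cross_entropy(out[:batch.batch_size], batch.y[:batch.batch_size])
    loss.backward()
    opt.step()
```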